Proceedings of the 2002 Winter Simulation Conference
E. Yücesan, C.-H. Chen, J. L. Snowdon, and J. M. Charnes, eds.

RESPONSE SURFACE METHODOLOGY REVISITED

Ebru Angün
Jack P.C. Kleijnen
Department of Information Management/Center for Economic Research (CentER)
School of Economics and Business Administration
Tilburg University
5000 LE Tilburg, THE NETHERLANDS

Dick Den Hertog
Gül Gürkan
Department of Econometrics and Operations Research/Center for Economic Research (CentER)
School of Economics and Business Administration
Tilburg University
5000 LE Tilburg, THE NETHERLANDS

ABSTRACT

Response Surface Methodology (RSM) searches for the input combination that optimizes the simulation output. RSM treats the simulation model as a black box. Moreover, this paper assumes that simulation requires much computer time. In the first stages of its search, RSM locally fits first-order polynomials. Next, classic RSM uses steepest descent (SD); unfortunately, SD is scale dependent. Therefore, Part 1 of this paper derives scale independent adapted SD (ASD), accounting for covariances between components of the local gradient. Monte Carlo experiments show that ASD indeed gives a better search direction than SD. Part 2 considers multiple outputs, optimizing a stochastic objective function under stochastic and deterministic constraints. This part uses interior point methods and binary search, to derive a scale independent search direction and several step sizes in that direction. Monte Carlo examples demonstrate that a neighborhood of the true optimum can indeed be reached, in a few simulation runs.

1 INTRODUCTION

RSM was invented by Box and Wilson (1951) for finding the input combination that minimizes the output of a real, non-simulated system. They ignored constraints. Also see recent publications such as Box (1999), Khuri and Cornell (1996), Myers (1999), and Myers and Montgomery (1995).
Later on, RSM was also applied to random simulation models, treating these models as black boxes (a black box means that there is no gradient information available; see Spall (1999)). Classic articles are Donohue, Houck, and Myers (1993, 1995); recent publications are Irizarry, Wilson, and Trevino (2001), Kleijnen (1998), Law and Kelton (2000), Neddermeijer et al. (2000), and Safizadeh (2002). Technically, RSM is a stagewise heuristic that searches through various local (sub)areas of the global area in which the simulation model is valid. We focus on the first stage, which fits first-order polynomials in the inputs, per local area. This fitting uses Ordinary Least Squares (OLS) and estimates the SD path, as follows. Let d_j denote the value of the original (nonstandardized) input j, with j = 1, ..., k. Hence k main or first-order effects (say) β_j are to be estimated in the local first-order polynomial approximation. For this estimation, classic RSM uses resolution-3 designs, which specify the n ≥ k + 1 input combinations to be simulated. (Spall (1999) proposes to simulate only two combinations in his simultaneous perturbation stochastic approximation or SPSA.) These input/output (I/O) combinations give the OLS estimates β̂_j, and the SD path uses the local gradient (β̂_1, ..., β̂_k). Unfortunately, RSM suffers from two well-known problems; see Myers and Montgomery (1995): (i) SD is scale dependent; (ii) the step size along the SD path is selected intuitively. For example, in a case study, Kleijnen (1993) uses a step size that doubles the most important input. Our research contribution is the following. In Part 1 (Sections 2-3) we derive ASD; that is, we adjust the estimated first-order factor effects through their estimated covariance matrix. We prove that ASD is scale independent. In most of our Monte Carlo experiments with simple test functions, ASD indeed gives a better search direction. Note that we examine only the search direction, not the other elements of classic RSM.
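The local fitting step just described can be sketched in a few lines. The following Python fragment is a minimal illustration, not the authors' code: the black-box function `simulate`, the local area, and the design values are hypothetical stand-ins for an expensive simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(d):
    # Hypothetical black-box simulation output (stand-in for an expensive run).
    return 2.0 * d[0] - 1.0 * d[1] + 0.1 * rng.standard_normal()

# Resolution-3 design in k = 2 inputs: n = k + 1 = 3 combinations,
# centered on a (hypothetical) local area around d_center with half-width delta.
d_center, delta = np.array([0.85, -0.95]), 0.05
design = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
D = d_center + delta * design

X = np.column_stack([np.ones(len(D)), D])     # dummy column of 1s, then inputs
w = np.array([simulate(d) for d in D])        # one simulation run per combination
beta_hat = np.linalg.solve(X.T @ X, X.T @ w)  # OLS: (X'X)^{-1} X'w
grad = beta_hat[1:]                           # local gradient (beta_1, ..., beta_k)
sd_direction = -grad                          # steepest descent, for minimization
```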
In Part 2 (Sections 4-5) we consider multiple outputs, whereas classic RSM assumes a single output. We optimize a stochastic objective function under multiple stochastic and deterministic constraints. We derive a scale independent search direction inspired by interior point
methods - and several step sizes - inspired by binary search. This search direction, namely scaled and projected SD, is a generalization of classic RSM's SD. We then combine these search directions and step sizes into an iterative heuristic. Notice that if there are no binding constraints at the optimum, then classic RSM combined with ASD might suffice.

The remainder of this paper is organized as follows. For the unconstrained problem, Section 2 derives ASD, its mathematical properties, and its interpretation. Section 3 compares the search directions SD and ASD, by means of Monte Carlo experiments. Section 4 derives a novel heuristic combining a search direction and a step-size procedure for constrained problems. Section 5 studies the performance of the novel heuristic by means of Monte Carlo experiments. Section 6 gives conclusions. Note that this paper summarizes two separate papers, namely Kleijnen, Den Hertog, and Angün (2002), and Angün et al. (2002), which give all mathematical proofs and additional experimental results.

2 ADAPTED STEEPEST DESCENT

RSM uses the following approximation:

y = β₀ + Σ_{j=1}^{k} β_j d_j + e    (1)

where y denotes the predictor of the expected simulation output, and e denotes the noise, consisting of intrinsic noise caused by the simulation's pseudo-random numbers (PRN) plus lack of fit. RSM assumes white noise; that is, e is normally, identically, and independently distributed with zero mean μ_e and constant variance σ²_e. The OLS estimator of the q = k + 1 parameters β = (β₀, ..., β_k)' in (1) is

β̂ = (X'X)⁻¹ X'w    (2)

where

X: N × q matrix of explanatory variables, including the dummy variable with constant value 1; X is assumed to have linearly independent columns
N = Σ_{i=1}^{n} m_i: number of simulation runs
m_i: number of replicates at input combination i, with m_i ∈ ℕ, m_i > 0
n: number of different, simulated input combinations, with n ∈ ℕ, n ≥ q
w: vector with the N simulation outputs w_{i;r} (r = 1, ..., m_i).
The noise in (1) may be estimated through the mean squared residual (MSR):

σ̂²_e = Σ_{i=1}^{n} Σ_{r=1}^{m_i} (w_{i;r} − ŷ_i)² / (N − q)    (3)

where ŷ_i follows from (1) and (2): ŷ_i = β̂₀ + Σ_{j=1}^{k} β̂_j d_{i;j}.

Kleijnen et al. (2002) derives the design point that minimizes the variance of the regression predictor, d₀ = −C⁻¹b, where σ²_e C is the covariance matrix of β̂₋₀, which equals β̂ excluding the intercept β̂₀:

cov(β̂) = σ²_e (X'X)⁻¹ = σ²_e ( a  b' ; b  C )    (4)

where a is a scalar, b a k-dimensional vector, and C a k × k matrix.

Now we consider the one-sided 1 − α confidence interval ranging from −∞ to

ŷ_max(d) = x'β̂ + t^α_{N−q} σ̂_e √( x'(X'X)⁻¹x )    (5)

where x = (1, d')' and t^α_{N−q} denotes the 1 − α quantile of the t distribution with N − q degrees of freedom.

ASD selects d⁺, the design point that minimizes the maximum output predicted through (5) (this gives both a search direction and a step size). Kleijnen et al. (2002) derives

d⁺ = −C⁻¹b − λ C⁻¹β̂₋₀    (6a)

where −C⁻¹b is derived from (4), −C⁻¹β̂₋₀ is the ASD direction, and λ is the step size:

λ = √( (a − b'C⁻¹b) / ( (t^α_{N−q} σ̂_e)² − β̂'₋₀ C⁻¹ β̂₋₀ ) )    (6b)

We mention the following mathematical properties and interpretations of ASD. The first term in (6a) means that the ASD path starts from the point with minimal predictor variance. The second
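The ASD computations of (4)-(6) reduce to partitioning (X'X)⁻¹ into a, b, and C and doing two linear solves. The sketch below assumes the forms d⁺ = −C⁻¹b − λC⁻¹β̂₋₀ and the step size under the square root as reconstructed here; the function name is ours, and the t quantile t_quant (e.g., from scipy.stats.t.ppf) is supplied by the caller.

```python
import numpy as np

def asd_step(XtX_inv, beta_hat, sigma_e, t_quant):
    """Adapted steepest descent: return the next design point d_plus, or None
    when the signal/noise ratio is too large for a finite solution (see text).

    XtX_inv: (k+1) x (k+1) matrix (X'X)^{-1}; beta_hat: OLS estimates with the
    intercept first; sigma_e: estimated noise std; t_quant: 1-alpha quantile
    of the t distribution with N - q degrees of freedom.
    """
    a = XtX_inv[0, 0]              # scalar block of (4)
    b = XtX_inv[1:, 0]             # k-vector block
    C = XtX_inv[1:, 1:]            # k x k block
    beta_min0 = beta_hat[1:]       # estimates excluding the intercept

    Cinv_b = np.linalg.solve(C, b)
    Cinv_beta = np.linalg.solve(C, beta_min0)
    denom = (t_quant * sigma_e) ** 2 - beta_min0 @ Cinv_beta
    if denom <= 0:
        return None                           # large signal/noise: no finite step
    lam = np.sqrt((a - b @ Cinv_b) / denom)   # step size (6b)
    return -Cinv_b - lam * Cinv_beta          # d_plus of (6a)
```

The `None` branch corresponds to the large signal/noise case discussed in the text, where (6) gives no finite solution.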
term means that the classic SD direction β̂₋₀ (the second term's last factor) is adjusted for the covariance matrix of β̂₋₀; see C in (4). The step size λ is quantified in (6b). Kleijnen et al. (2002) proves that ASD is scale independent.

In case of large signal/noise ratios β̂_j / √(var(β̂_j)), the denominator under the square root in (6b) is negative, so (6) does not give a finite solution for d⁺. Indeed, if the noise is negligible, we have a deterministic problem, which our technique is not meant to address (many other researchers - including Conn et al. (2000) - study optimization of deterministic simulation models). In case of a small signal/noise ratio, no step is taken. Kleijnen et al. (2002) further discusses two subcases: (i) the signal is small; (ii) the noise is big.

3 COMPARISON OF ASD AND SD THROUGH MONTE CARLO EXPERIMENTS

To compare the ASD and SD search directions, we perform Monte Carlo experiments. The Monte Carlo method is an efficient and effective way to estimate the behavior of search techniques applied to random simulations; see Kleijnen et al. (2002). We limit the example to two inputs, so k = 2. We generate the simulation output w through a second-order polynomial with white noise:

w = β₀ + β₁ d₁ + β₂ d₂ + β₁;₁ d₁² + β₂;₂ d₂² + β₁;₂ d₁ d₂ + e.    (7)

The response surface (7) holds for the global area, for which we take the unit square: −1 ≤ d₁ ≤ 1 and −1 ≤ d₂ ≤ 1. (We have already seen that RSM fits first-order polynomials locally.) In the local area we use a one-at-a-time design, because this design is non-orthogonal - and in practice designs in the original inputs are not orthogonal (see Kleijnen et al. (2002)). The specific local area is in the lower corner of Figure 1. To enable the computation of the MSR, we simulate one input combination twice: m₁ = 2. We consider a specific case of (7): β₀ = β₁ = β₂ = 0, β₁;₂ = 2, β₁;₁ = −2, β₂;₂ = −1, so the contour functions (for example, iso-cost curves) form ellipsoids tilted relative to the d₁ and d₂ axes. Hence, (7) has as its true optimum d* = (0, 0).
After fitting a first-order polynomial, we estimate the SD and ASD paths starting from d₀ = (0.85, −0.95), as explained above.

Figure 1: Tilted Ellipsoid Contours E(w | d₁, d₂) with Global and Local Experimental Areas

In this Monte Carlo experiment we know the truly optimal search direction, namely the vector (say) g that starts at d₀ and ends at the true optimum (0, 0). So we compute the angle (say) θ̂ between the true search direction g and the estimated search direction p:

θ̂ = arccos( g'p / (‖g‖ ‖p‖) )    (8)

Obviously, the smaller θ̂ is, the better the search technique performs. To estimate the distribution of θ̂ defined in (8), we take 100 macro-replications. Figure 2 shows a bundle of 100 p's around g. In each macro-replicate, we apply SD and ASD to the same I/O data (w, d₁, d₂). We characterize the resulting empirical θ̂ distribution through several statistics, namely its average, standard deviation, and specific quantiles; see Table 1.

Figure 2: 100 ASD Search Directions p and the Truly Optimal Search Direction g (Marked by Thick Dots)
Table 1: Statistics of the Estimated Angle Error θ̂ (in Degrees) for ASD and SD, in the Case of Interactions (average, standard deviation, median, and several further quantiles, for σ_e = 0.10 and σ_e = 0.25)

Further, we perform the Monte Carlo experiment for two noise values: σ_e is 0.10 or 0.25. We use the same PRN for both values. In case of high noise, the estimated search directions may be very wrong. Nevertheless, ASD still performs better; see again Table 1. In general, ASD performs better than SD, unless we focus on outliers; see Kleijnen et al. (2002).

4 MULTIPLE RESPONSES: INTERIOR POINT AND BINARY SEARCH APPROACH

In Part 1, we assumed a single response of interest, denoted by w in (2). Now we consider a more realistic situation, namely the simulation generates multiple outputs. For example, an academic inventory simulation defines w in (2) as the sum of inventory-carrying, ordering, and out-of-stock costs, whereas a practical simulation minimizes the sum of the expected inventory-carrying and ordering costs, provided the service probability exceeds a pre-specified value.

In RSM, there have been several approaches to multiresponse optimization. Khuri (1996) surveys most of these approaches (including desirability functions, generalized distances, and dual responses). Angün et al. (2002) discusses drawbacks of these approaches. To overcome these drawbacks, we propose the following alternative based on mathematical programming. We select one of the responses as the objective and the remaining (say) z − 1 responses as constraints. The SD search would soon hit the boundary of the feasible area formed by these constraints, and would then creep along this boundary. Instead, our search starts in the interior of the feasible area and avoids the boundary; see Barnes (1986) on Karmarkar's algorithm for linear programming. Note that our approach has the additional advantage of avoiding areas in which the simulation model is not valid and may even crash.
Formally, our problem becomes:

minimize E(w₀(d))
subject to E(w_h(d)) ≥ a_h for h = 1, ..., z − 1    (9)
l ≤ d ≤ u

where d is the vector of simulation inputs, l and u are the deterministic lower and upper bounds on d, a_h is the right-hand-side value for the h-th constraint, and w_h (h = 0, ..., z − 1) is response h. Note that probabilities (for example, service percentages) can be formulated as expected values of indicator functions. Further, the multiple simulation responses are correlated, since they are estimated through the same PRN fed into the same simulation model.

As in Part 1, we locally fit a first-order polynomial, but now for each response; see (1). However, the noise e is now multivariate normal, still with zero means but now with covariance matrix (say) Σ. Yet, since the same design is used for all z responses, the GLS estimator reduces to the OLS estimator; see Ruud (2000, p. 703). Therefore we still use (2), but to the symbol β̂ we add the subscript h. Further, for h, h' = 0, ..., z − 1 we estimate Σ through the analogue of (3):

σ̂_{h,h'} = (w_h − ŷ_h)'(w_{h'} − ŷ_{h'}) / (N − (k + 1))    (10)

We introduce B = (b₁, ..., b_{z−1}), where b_h (h = 1, ..., z − 1) denotes the vector of OLS estimates β̂₋₀;h (excluding the intercept β̂₀;h) for the h-th response. Adding slack vectors s, r, and v, we obtain

minimize b₀'d
subject to B'd − s = c, d + r = u, d − v = l, s, r, v ≥ 0    (11)

where b₀ denotes the vector of OLS estimates β̂₋₀;₀ (excluding the intercept β̂₀;₀) for w₀, and c is the vector with components c_h = a_h − β̂₀;h (h = 1, ..., z − 1). Through (11) we obtain a local linear approximation for (9). Then, using ideas from interior point methods - more specifically, the affine scaling method - Angün et al. (2002) derives the following search direction:

p = −(B S⁻² B' + R⁻² + V⁻²)⁻¹ b₀    (12)
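Given the current slacks, the search direction of (12) is a single linear solve. The sketch below assumes the form p = −(B S⁻² B' + R⁻² + V⁻²)⁻¹ b₀, with B of size k × (z − 1) and all slacks strictly positive; the function name is ours.

```python
import numpy as np

def ip_direction(B, b0, s, r, v):
    """Affine-scaling search direction p of (12).

    B: k x (z-1) matrix of constraint gradients; b0: objective gradient;
    s, r, v: current (strictly positive) slack vectors of (11).
    """
    S2 = np.diag(s ** -2.0)         # S^{-2}, slacks of B'd - s = c
    R2 = np.diag(r ** -2.0)         # R^{-2}, slacks of d + r = u
    V2 = np.diag(v ** -2.0)         # V^{-2}, slacks of d - v = l
    M = B @ S2 @ B.T + R2 + V2
    return -np.linalg.solve(M, b0)  # scales and projects the SD direction -b0
```

Note how a small slack inflates the corresponding diagonal term of M, bending p away from that nearly-binding constraint.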
where S, R, and V are diagonal matrices with the current estimated slack vectors s, r, v > 0 on the diagonal. Obviously, R and V in (12) are known as soon as the deterministic input to the simulation is selected; S in (12) is estimated from the simulation output, not from the local approximation. Unlike SD, the search direction (12) is scale independent (the inverse of the matrix within the parentheses in (12) scales and projects the estimated SD direction −b₀).

Having estimated a search direction for a specific starting point through (12), we must next select a step size. Actually, we run the simulation model for several step sizes in that direction, as follows. First, we compute the maximum step size assuming that the local approximation (11) holds globally:

λ_max = max{0, min{λ₁, λ₂, λ₃}}

where

λ₁ = min{ (c_h − b_h'd)/(p'b_h) : h ∈ {1, ..., z − 1}, p'b_h < 0 }
λ₂ = min{ (u_j − d_j)/p_j : j ∈ {1, ..., k}, p_j > 0 }
λ₃ = min{ (l_j − d_j)/p_j : j ∈ {1, ..., k}, p_j < 0 }.

To increase the probability of staying within the interior of the feasible region, we take only 80% of λ_max as our maximum step size. The subsequent step sizes are inspired by binary search, as follows. We systematically halve the current step size along the search direction. At each step, we select as the best point the one with the minimum value for the simulation objective w₀, provided it is feasible. We stop the search in a particular direction after a user-specified number of iterations (say) G = 3. For details see Angün et al. (2002).

For all these steps we use common random numbers (CRN), in order to better test whether the objective improves. Moreover, we test whether the other z − 1 responses remain within the feasible area: we test the slack vector s, introduced in (11). These z tests use ratios instead of absolute differences, to avoid scale dependence. A statistical complication is that these ratios may not have finite moments. Therefore we test their medians (not their means).
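The maximum step size λ_max and the halved step sizes can be sketched as follows, assuming constraints of the form B'd ≥ c and bounds l ≤ d ≤ u as in (11); function names are ours.

```python
import numpy as np

def max_step(d, p, B, c, l, u):
    """lambda_max = max{0, min{lambda_1, lambda_2, lambda_3}} of Section 4."""
    cands = []
    Btp = B.T @ p
    slack = B.T @ d - c                         # current constraint slacks
    for h in range(len(c)):
        if Btp[h] < 0:
            cands.append(-slack[h] / Btp[h])    # lambda_1 terms
    for j in range(len(d)):
        if p[j] > 0:
            cands.append((u[j] - d[j]) / p[j])  # lambda_2 terms
        elif p[j] < 0:
            cands.append((l[j] - d[j]) / p[j])  # lambda_3 terms
    return max(0.0, min(cands)) if cands else 0.0

def step_sizes(lam_max, G=3):
    """80% of lambda_max, then successively halved (binary-search flavor)."""
    lam = 0.8 * lam_max
    return [lam * 0.5 ** g for g in range(G)]
```

The simulation is then run at each of the G candidate points, and the best feasible one is retained.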
For these tests we use Monte Carlo sampling, which takes negligible computer time compared with the expensive simulation runs. This Monte Carlo takes (say) K = 1000 samples from the assumed distributions, with means and variances estimated through the simulation; in the numerical example of the next section we assume z normal distributions, ignoring correlations between these responses. From these Monte Carlo samples we compute slack ratios. We formulate pessimistic null-hypotheses; that is, we accept an input combination only if it gives a significantly lower objective value and all its z − 1 slacks imply a feasible solution. Actually, our interior point method implies that a new slack value is a percentage - say, 20% - of the old slack value; our pessimistic hypotheses make our acceptable area smaller than the original feasible area in (9).

After we have run G simulations along the search path, we find a best solution - so far. Now we wish to re-estimate the search direction p defined in (12). Therefore we again use a resolution-3 design in the k factors. We still use CRN; actually, we take the same seeds as we used for the very first local exploration. We save one (expensive) run by using the best combination found so far as one of the combinations for the design. We stop the whole search when either the computer budget is exhausted or the search returns to an old combination twice. When the search returns to an old combination for the first time, we use a new set of seeds.

5 MONTE CARLO EXPERIMENTS FOR MULTIPLE RSM

As in Part 1 (Section 3), we study the novel procedure - explained in Section 4 - by means of a Monte Carlo example. As in Section 3, we assume globally valid functions quadratic in two inputs, but now we consider three responses; moreover, we add deterministic box constraints for the two inputs:

minimize E( 5(d₁ − 1)² + (d₂ − 5)² + e₀ )
subject to E( (d₁ − 3)² + e₁ ) ≥ 4
E( d₁² + 3 d₂² + e₂ ) ≥ 9
l ≤ (d₁, d₂)' ≤ u

where the noise has σ₀ = 1, σ₁ = 0.15, σ₂ = 0.4, and correlations ρ₀;₁ = 0.6, ρ₀;₂ = 0.3, ρ₁;₂ = 0.1.
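The Monte Carlo acceptance test just described can be illustrated loosely. The sketch below samples K objective values at the old and new points from normal distributions with simulation-estimated moments and compares the median ratio; the function name and the simple median-improvement rule are illustrative assumptions, not the authors' exact pessimistic hypothesis tests.

```python
import numpy as np

rng = np.random.default_rng(1)

def median_ratio_improves(mean_old, mean_new, var_old, var_new, K=1000):
    """Monte Carlo check (sketch): does the median of the K sampled ratios
    new/old indicate improvement, for a minimization objective?"""
    old = rng.normal(mean_old, np.sqrt(var_old), K)
    new = rng.normal(mean_new, np.sqrt(var_new), K)
    return np.median(new / old) < 1.0
```

Ratios (rather than absolute differences) keep the test scale independent, and the median sidesteps the possibly infinite moments of the ratio distribution.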
It is easy to derive the analytical solution as d* = (1.4, 0.5), with a mean objective value of 2.96 approximately. We select the initial local area shown in the lower left corner of Figure 3. We run 100 macro-replicates; Figure 3 displays the macro-replicate that gives the median result for the objective; that is, 50% of the macro-replicates have worse objective values. In this figure we have d̂ = (1.46, 0.49) and an estimated objective of 5.30 approximately. Table 2 summarizes the 100 macro-replicates, where Criterion 1 is the relative expected objective (E(w₀(d₁, d₂)) − 2.96)/2.96; Criteria 2 and 3 stand for
the relative expected slacks (E(w₁(d₁, d₂)) − 4)/4 and (E(w₂(d₁, d₂)) − 9)/9 for the first and the second constraints. Ideally, Criterion 1 is zero; Criteria 2 and 3 are zero if the constraints are binding at the optimum. Our heuristic tends to end at a feasible combination: the table displays only positive values for Criteria 2 and 3. This feasibility is explained by our pessimistic null hypotheses (and our small significance level α = 0.01). Our conclusion is that the heuristic reaches the desired neighborhood of the real optimum in a relatively small number of simulation runs. Once the heuristic reaches this neighborhood, it usually stops at a feasible point.

Figure 3: The Median (50% Quantile) of 100 Estimated Solutions

Table 2: Estimated Objective and Slacks over 100 Macro-replicates (Criteria 1-3, reported at the 10th, 25th, 50th, 75th, and 90th quantiles)

6 CONCLUSIONS

In Part 1 of this paper we addressed the problem of searching for the simulation input combination that minimizes the output. RSM is a classic technique for tackling this problem, but it uses SD, which is scale dependent. Therefore we devised adapted SD (ASD), which corrects for the covariances of the estimated gradient components. ASD is scale independent. Our Monte Carlo experiments demonstrate that - in general - ASD gives a better search direction than SD.

In Part 2, we account for multiple simulation responses. We use a mathematical programming approach; that is, we minimize one random objective under random and deterministic constraints. As in classic RSM, we locally fit functions linear in the simulation inputs. Next we apply interior point techniques to these local approximations, to estimate a search direction. This direction is scale independent. We take several steps in this direction, using binary search and statistical tests. Then we re-estimate these local linear functions, etc. Our Monte Carlo experiments demonstrate that our method indeed approaches the true optimum, in relatively few runs with the (expensive) simulation model.
REFERENCES

Angün, E., D. Den Hertog, G. Gürkan, and J. P. C. Kleijnen. 2002. Constrained response surface methodology for simulation with multiple responses. CentER Working Paper.
Barnes, E. R. 1986. A variation on Karmarkar's algorithm for solving linear programming problems. Mathematical Programming 36.
Box, G. E. P. 1999. Statistics as a catalyst to learning by scientific method, part II - a discussion. Journal of Quality Technology 31 (1).
Box, G. E. P. and K. B. Wilson. 1951. On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, Series B 13 (1).
Conn, A. R., N. Gould, and Ph. L. Toint. 2000. Trust Region Methods. Philadelphia: SIAM.
Donohue, J. M., E. C. Houck, and R. H. Myers. 1993. Simulation designs and correlation induction for reducing order bias in first-order response surfaces. Operations Research 41 (5).
Donohue, J. M., E. C. Houck, and R. H. Myers. 1995. Simulation designs for the estimation of response surface gradients in the presence of model misspecification. Management Science 41 (2).
Irizarry, M., J. R. Wilson, and J. Trevino. 2001. A flexible simulation tool for manufacturing-cell design, II: response surface analysis and case study. IIE Transactions 33.
Khuri, A. I. 1996. Multiresponse surface methodology. In Handbook of Statistics, ed. S. Ghosh and C. R. Rao. Amsterdam: Elsevier.
Khuri, A. I. and J. A. Cornell. 1996. Response Surfaces: Designs and Analyses. 2d ed. New York: Marcel Dekker.
Kleijnen, J. P. C. 1993. Simulation and optimization in production planning: a case study. Decision Support Systems 9.
Kleijnen, J. P. C. 1998. Experimental design for sensitivity analysis, optimization, and validation of simulation models. In Handbook of Simulation, ed. J. Banks. New York: John Wiley & Sons.
Kleijnen, J. P. C., D. Den Hertog, and E. Angün. 2002. Response surface methodology's steepest ascent and step size revisited. CentER Working Paper.
Law, A. M. and W. D. Kelton. 2000. Simulation Modeling and Analysis. 3d ed. Boston: McGraw-Hill.
Myers, R. H. 1999. Response surface methodology: current status and future directions. Journal of Quality Technology 31 (1).
Myers, R. H. and D. C. Montgomery. 1995. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: John Wiley & Sons.
Neddermeijer, H. G., G. J. van Oortmarssen, N. Piersma, and R. Dekker. 2000. A framework for response surface methodology for simulation optimization models. In Proceedings of the 2000 Winter Simulation Conference, ed. J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.
Ruud, P. A. 2000. An Introduction to Classical Econometric Theory. New York: Oxford University Press.
Safizadeh, M. H. 2002. Minimizing the bias and variance of the gradient estimate in RSM simulation studies. European Journal of Operational Research 136 (1).
Spall, J. C. 1999. Stochastic optimization and the simultaneous perturbation method. In Proceedings of the 1999 Winter Simulation Conference, ed. P. A. Farrington, H. B. Nembhard, D. T. Sturrock, and G. W. Evans. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

AUTHOR BIOGRAPHIES

EBRU ANGÜN is a Ph.D. student at the Department of Information Management at Tilburg University in the Netherlands, since 2000. Her e-mail address is <M.E.Angun@uvt.nl>.

JACK P.C. KLEIJNEN is a Professor of Simulation and Information Systems. His research concerns simulation, mathematical statistics, information systems, and logistics; this research resulted in six books and nearly 200 articles. He has been a consultant for several organizations in the USA and Europe, and has served on many international editorial boards and scientific committees. He spent several years in the USA, at both universities and companies, and received a number of international fellowships and awards. His e-mail and web address are <kleijnen@uvt.nl> and <faculties/few/im/staff/kleijnen/>.

DICK DEN HERTOG is a Professor of Operations Research and Management Science at the Center for Economic Research (CentER), within the Faculty of Economics and Business Administration at Tilburg University, in the Netherlands. He received his Ph.D. (cum laude) in 1992. From 1992 until 1999 he was a consultant for optimization at CQM in Eindhoven. His research concerns deterministic and stochastic simulation-based optimization and nonlinear programming, with applications in logistics and production. His e-mail and web address are <D.denHertog@uvt.nl> and <kub.nl/~few5/center/staff/hertog>.

GÜL GÜRKAN is an Associate Professor of Operations Research and Management Science at the Center for Economic Research (CentER), within the Faculty of Economics and Business Administration at Tilburg University, in the Netherlands. She received her Ph.D. in Industrial Engineering from the University of Wisconsin-Madison. Her research interests include simulation, mathematical programming, stochastic optimization, and equilibrium models with applications in logistics, production, telecommunications, economics, and finance. She is a member of INFORMS. Her e-mail and web address are <ggurkan@uvt.nl> and <kub.nl/people/ggurkan/>.
Tilburg University
Response surface methodology revisited
Angun, M.E.; Gurkan, Gul; den Hertog, Dick; Kleijnen, Jack
Published in: Proceedings of the 2002 Winter Simulation Conference
Publication date: 2002
More informationFLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS. 1. Introduction
FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS ALINA BUCUR, CHANTAL DAVID, BROOKE FEIGON, MATILDE LALÍN 1 Introuction In this note, we stuy the fluctuations in the number
More informationHyperbolic Moment Equations Using Quadrature-Based Projection Methods
Hyperbolic Moment Equations Using Quarature-Base Projection Methos J. Koellermeier an M. Torrilhon Department of Mathematics, RWTH Aachen University, Aachen, Germany Abstract. Kinetic equations like the
More informationSurvey-weighted Unit-Level Small Area Estimation
Survey-weighte Unit-Level Small Area Estimation Jan Pablo Burgar an Patricia Dörr Abstract For evience-base regional policy making, geographically ifferentiate estimates of socio-economic inicators are
More informationThermal conductivity of graded composites: Numerical simulations and an effective medium approximation
JOURNAL OF MATERIALS SCIENCE 34 (999)5497 5503 Thermal conuctivity of grae composites: Numerical simulations an an effective meium approximation P. M. HUI Department of Physics, The Chinese University
More informationTable of Common Derivatives By David Abraham
Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec
More informationThis module is part of the. Memobust Handbook. on Methodology of Modern Business Statistics
This moule is part of the Memobust Hanbook on Methoology of Moern Business Statistics 26 March 2014 Metho: EBLUP Unit Level for Small Area Estimation Contents General section... 3 1. Summary... 3 2. General
More informationImproving Estimation Accuracy in Nonrandomized Response Questioning Methods by Multiple Answers
International Journal of Statistics an Probability; Vol 6, No 5; September 207 ISSN 927-7032 E-ISSN 927-7040 Publishe by Canaian Center of Science an Eucation Improving Estimation Accuracy in Nonranomize
More informationTransmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency
Transmission Line Matrix (TLM network analogues of reversible trapping processes Part B: scaling an consistency Donar e Cogan * ANC Eucation, 308-310.A. De Mel Mawatha, Colombo 3, Sri Lanka * onarecogan@gmail.com
More informationA Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation
A Novel ecouple Iterative Metho for eep-submicron MOSFET RF Circuit Simulation CHUAN-SHENG WANG an YIMING LI epartment of Mathematics, National Tsing Hua University, National Nano evice Laboratories, an
More informationDesigning of Acceptance Double Sampling Plan for Life Test Based on Percentiles of Exponentiated Rayleigh Distribution
International Journal of Statistics an Systems ISSN 973-675 Volume, Number 3 (7), pp. 475-484 Research Inia Publications http://www.ripublication.com Designing of Acceptance Double Sampling Plan for Life
More informationSystems & Control Letters
Systems & ontrol Letters ( ) ontents lists available at ScienceDirect Systems & ontrol Letters journal homepage: www.elsevier.com/locate/sysconle A converse to the eterministic separation principle Jochen
More informationVI. Linking and Equating: Getting from A to B Unleashing the full power of Rasch models means identifying, perhaps conceiving an important aspect,
VI. Linking an Equating: Getting from A to B Unleashing the full power of Rasch moels means ientifying, perhaps conceiving an important aspect, efining a useful construct, an calibrating a pool of relevant
More informationSwitching Time Optimization in Discretized Hybrid Dynamical Systems
Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set
More informationSpurious Significance of Treatment Effects in Overfitted Fixed Effect Models Albrecht Ritschl 1 LSE and CEPR. March 2009
Spurious Significance of reatment Effects in Overfitte Fixe Effect Moels Albrecht Ritschl LSE an CEPR March 2009 Introuction Evaluating subsample means across groups an time perios is common in panel stuies
More informationOptimal CDMA Signatures: A Finite-Step Approach
Optimal CDMA Signatures: A Finite-Step Approach Joel A. Tropp Inst. for Comp. Engr. an Sci. (ICES) 1 University Station C000 Austin, TX 7871 jtropp@ices.utexas.eu Inerjit. S. Dhillon Dept. of Comp. Sci.
More information19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control
19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior
More informationEVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION OF UNIVARIATE TAYLOR SERIES
MATHEMATICS OF COMPUTATION Volume 69, Number 231, Pages 1117 1130 S 0025-5718(00)01120-0 Article electronically publishe on February 17, 2000 EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION
More informationA Review of Multiple Try MCMC algorithms for Signal Processing
A Review of Multiple Try MCMC algorithms for Signal Processing Luca Martino Image Processing Lab., Universitat e València (Spain) Universia Carlos III e Mari, Leganes (Spain) Abstract Many applications
More informationRegularized extremal bounds analysis (REBA): An approach to quantifying uncertainty in nonlinear geophysical inverse problems
GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L03304, oi:10.1029/2008gl036407, 2009 Regularize extremal bouns analysis (REBA): An approach to quantifying uncertainty in nonlinear geophysical inverse problems
More informationAPPROXIMATE SOLUTION FOR TRANSIENT HEAT TRANSFER IN STATIC TURBULENT HE II. B. Baudouy. CEA/Saclay, DSM/DAPNIA/STCM Gif-sur-Yvette Cedex, France
APPROXIMAE SOLUION FOR RANSIEN HEA RANSFER IN SAIC URBULEN HE II B. Bauouy CEA/Saclay, DSM/DAPNIA/SCM 91191 Gif-sur-Yvette Ceex, France ABSRAC Analytical solution in one imension of the heat iffusion equation
More informationOptimal Variable-Structure Control Tracking of Spacecraft Maneuvers
Optimal Variable-Structure Control racking of Spacecraft Maneuvers John L. Crassiis 1 Srinivas R. Vaali F. Lanis Markley 3 Introuction In recent years, much effort has been evote to the close-loop esign
More informationExpected Value of Partial Perfect Information
Expecte Value of Partial Perfect Information Mike Giles 1, Takashi Goa 2, Howar Thom 3 Wei Fang 1, Zhenru Wang 1 1 Mathematical Institute, University of Oxfor 2 School of Engineering, University of Tokyo
More informationLinear First-Order Equations
5 Linear First-Orer Equations Linear first-orer ifferential equations make up another important class of ifferential equations that commonly arise in applications an are relatively easy to solve (in theory)
More informationinflow outflow Part I. Regular tasks for MAE598/494 Task 1
MAE 494/598, Fall 2016 Project #1 (Regular tasks = 20 points) Har copy of report is ue at the start of class on the ue ate. The rules on collaboration will be release separately. Please always follow the
More informationTest of Hypotheses in a Time Trend Panel Data Model with Serially Correlated Error Component Disturbances
HE UNIVERSIY OF EXAS A SAN ANONIO, COLLEGE OF BUSINESS Working Paper SERIES Date September 25, 205 WP # 000ECO-66-205 est of Hypotheses in a ime ren Panel Data Moel with Serially Correlate Error Component
More informationDiophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations
Diophantine Approximations: Examining the Farey Process an its Metho on Proucing Best Approximations Kelly Bowen Introuction When a person hears the phrase irrational number, one oes not think of anything
More informationarxiv:hep-th/ v1 3 Feb 1993
NBI-HE-9-89 PAR LPTHE 9-49 FTUAM 9-44 November 99 Matrix moel calculations beyon the spherical limit arxiv:hep-th/93004v 3 Feb 993 J. Ambjørn The Niels Bohr Institute Blegamsvej 7, DK-00 Copenhagen Ø,
More informationGeneralizing Kronecker Graphs in order to Model Searchable Networks
Generalizing Kronecker Graphs in orer to Moel Searchable Networks Elizabeth Boine, Babak Hassibi, Aam Wierman California Institute of Technology Pasaena, CA 925 Email: {eaboine, hassibi, aamw}@caltecheu
More informationProblems Governed by PDE. Shlomo Ta'asan. Carnegie Mellon University. and. Abstract
Pseuo-Time Methos for Constraine Optimization Problems Governe by PDE Shlomo Ta'asan Carnegie Mellon University an Institute for Computer Applications in Science an Engineering Abstract In this paper we
More informationInfluence of weight initialization on multilayer perceptron performance
Influence of weight initialization on multilayer perceptron performance M. Karouia (1,2) T. Denœux (1) R. Lengellé (1) (1) Université e Compiègne U.R.A. CNRS 817 Heuiasyc BP 649 - F-66 Compiègne ceex -
More informationCombining Time Series and Cross-sectional Data for Current Employment Statistics Estimates 1
JSM015 - Surey Research Methos Section Combining Time Series an Cross-sectional Data for Current Employment Statistics Estimates 1 Julie Gershunskaya U.S. Bureau of Labor Statistics, Massachusetts Ae NE,
More informationThe derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)
Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)
More informationSimple Tests for Exogeneity of a Binary Explanatory Variable in Count Data Regression Models
Communications in Statistics Simulation an Computation, 38: 1834 1855, 2009 Copyright Taylor & Francis Group, LLC ISSN: 0361-0918 print/1532-4141 online DOI: 10.1080/03610910903147789 Simple Tests for
More informationMonotonicity for excited random walk in high dimensions
Monotonicity for excite ranom walk in high imensions Remco van er Hofsta Mark Holmes March, 2009 Abstract We prove that the rift θ, β) for excite ranom walk in imension is monotone in the excitement parameter
More informationEstimation of District Level Poor Households in the State of. Uttar Pradesh in India by Combining NSSO Survey and
Int. Statistical Inst.: Proc. 58th Worl Statistical Congress, 2011, Dublin (Session CPS039) p.6567 Estimation of District Level Poor Househols in the State of Uttar Praesh in Inia by Combining NSSO Survey
More informationA Course in Machine Learning
A Course in Machine Learning Hal Daumé III 12 EFFICIENT LEARNING So far, our focus has been on moels of learning an basic algorithms for those moels. We have not place much emphasis on how to learn quickly.
More informationOne-dimensional I test and direction vector I test with array references by induction variable
Int. J. High Performance Computing an Networking, Vol. 3, No. 4, 2005 219 One-imensional I test an irection vector I test with array references by inuction variable Minyi Guo School of Computer Science
More informationModeling time-varying storage components in PSpice
Moeling time-varying storage components in PSpice Dalibor Biolek, Zenek Kolka, Viera Biolkova Dept. of EE, FMT, University of Defence Brno, Czech Republic Dept. of Microelectronics/Raioelectronics, FEEC,
More informationLeaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes
Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,
More informationAnalyzing Tensor Power Method Dynamics in Overcomplete Regime
Journal of Machine Learning Research 18 (2017) 1-40 Submitte 9/15; Revise 11/16; Publishe 4/17 Analyzing Tensor Power Metho Dynamics in Overcomplete Regime Animashree Ananumar Department of Electrical
More informationA Sketch of Menshikov s Theorem
A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p
More informationTEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS. Yannick DEVILLE
TEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS Yannick DEVILLE Université Paul Sabatier Laboratoire Acoustique, Métrologie, Instrumentation Bât. 3RB2, 8 Route e Narbonne,
More informationLogarithmic spurious regressions
Logarithmic spurious regressions Robert M. e Jong Michigan State University February 5, 22 Abstract Spurious regressions, i.e. regressions in which an integrate process is regresse on another integrate
More informationFlexible High-Dimensional Classification Machines and Their Asymptotic Properties
Journal of Machine Learning Research 16 (2015) 1547-1572 Submitte 1/14; Revise 9/14; Publishe 8/15 Flexible High-Dimensional Classification Machines an Their Asymptotic Properties Xingye Qiao Department
More informationConstruction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems
Construction of the Electronic Raial Wave Functions an Probability Distributions of Hyrogen-like Systems Thomas S. Kuntzleman, Department of Chemistry Spring Arbor University, Spring Arbor MI 498 tkuntzle@arbor.eu
More informationTractability results for weighted Banach spaces of smooth functions
Tractability results for weighte Banach spaces of smooth functions Markus Weimar Mathematisches Institut, Universität Jena Ernst-Abbe-Platz 2, 07740 Jena, Germany email: markus.weimar@uni-jena.e March
More informationθ x = f ( x,t) could be written as
9. Higher orer PDEs as systems of first-orer PDEs. Hyperbolic systems. For PDEs, as for ODEs, we may reuce the orer by efining new epenent variables. For example, in the case of the wave equation, (1)
More informationTHE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS
THE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS BY ANDREW F. MAGYAR A issertation submitte to the Grauate School New Brunswick Rutgers,
More informationConservation laws a simple application to the telegraph equation
J Comput Electron 2008 7: 47 51 DOI 10.1007/s10825-008-0250-2 Conservation laws a simple application to the telegraph equation Uwe Norbrock Reinhol Kienzler Publishe online: 1 May 2008 Springer Scienceusiness
More informationCenter of Gravity and Center of Mass
Center of Gravity an Center of Mass 1 Introuction. Center of mass an center of gravity closely parallel each other: they both work the same way. Center of mass is the more important, but center of gravity
More informationCascaded redundancy reduction
Network: Comput. Neural Syst. 9 (1998) 73 84. Printe in the UK PII: S0954-898X(98)88342-5 Cascae reunancy reuction Virginia R e Sa an Geoffrey E Hinton Department of Computer Science, University of Toronto,
More informationSlide10 Haykin Chapter 14: Neurodynamics (3rd Ed. Chapter 13)
Slie10 Haykin Chapter 14: Neuroynamics (3r E. Chapter 13) CPSC 636-600 Instructor: Yoonsuck Choe Spring 2012 Neural Networks with Temporal Behavior Inclusion of feeback gives temporal characteristics to
More informationCopyright 2015 Quintiles
fficiency of Ranomize Concentration-Controlle Trials Relative to Ranomize Dose-Controlle Trials, an Application to Personalize Dosing Trials Russell Reeve, PhD Quintiles, Inc. Copyright 2015 Quintiles
More informationQubit channels that achieve capacity with two states
Qubit channels that achieve capacity with two states Dominic W. Berry Department of Physics, The University of Queenslan, Brisbane, Queenslan 4072, Australia Receive 22 December 2004; publishe 22 March
More informationarxiv: v2 [cs.ds] 11 May 2016
Optimizing Star-Convex Functions Jasper C.H. Lee Paul Valiant arxiv:5.04466v2 [cs.ds] May 206 Department of Computer Science Brown University {jasperchlee,paul_valiant}@brown.eu May 3, 206 Abstract We
More informationSummary: Differentiation
Techniques of Differentiation. Inverse Trigonometric functions The basic formulas (available in MF5 are: Summary: Differentiation ( sin ( cos The basic formula can be generalize as follows: Note: ( sin
More informationensembles When working with density operators, we can use this connection to define a generalized Bloch vector: v x Tr x, v y Tr y
Ph195a lecture notes, 1/3/01 Density operators for spin- 1 ensembles So far in our iscussion of spin- 1 systems, we have restricte our attention to the case of pure states an Hamiltonian evolution. Toay
More informationDissipative numerical methods for the Hunter-Saxton equation
Dissipative numerical methos for the Hunter-Saton equation Yan Xu an Chi-Wang Shu Abstract In this paper, we present further evelopment of the local iscontinuous Galerkin (LDG) metho esigne in [] an a
More informationPolynomial Inclusion Functions
Polynomial Inclusion Functions E. e Weert, E. van Kampen, Q. P. Chu, an J. A. Muler Delft University of Technology, Faculty of Aerospace Engineering, Control an Simulation Division E.eWeert@TUDelft.nl
More informationSparse Reconstruction of Systems of Ordinary Differential Equations
Sparse Reconstruction of Systems of Orinary Differential Equations Manuel Mai a, Mark D. Shattuck b,c, Corey S. O Hern c,a,,e, a Department of Physics, Yale University, New Haven, Connecticut 06520, USA
More informationOn the Surprising Behavior of Distance Metrics in High Dimensional Space
On the Surprising Behavior of Distance Metrics in High Dimensional Space Charu C. Aggarwal, Alexaner Hinneburg 2, an Daniel A. Keim 2 IBM T. J. Watson Research Center Yortown Heights, NY 0598, USA. charu@watson.ibm.com
More informationLinear Regression with Limited Observation
Ela Hazan Tomer Koren Technion Israel Institute of Technology, Technion City 32000, Haifa, Israel ehazan@ie.technion.ac.il tomerk@cs.technion.ac.il Abstract We consier the most common variants of linear
More informationOptimized Schwarz Methods with the Yin-Yang Grid for Shallow Water Equations
Optimize Schwarz Methos with the Yin-Yang Gri for Shallow Water Equations Abessama Qaouri Recherche en prévision numérique, Atmospheric Science an Technology Directorate, Environment Canaa, Dorval, Québec,
More informationMcMaster University. Advanced Optimization Laboratory. Title: The Central Path Visits all the Vertices of the Klee-Minty Cube.
McMaster University Avance Optimization Laboratory Title: The Central Path Visits all the Vertices of the Klee-Minty Cube Authors: Antoine Deza, Eissa Nematollahi, Reza Peyghami an Tamás Terlaky AvOl-Report
More informationA Novel Unknown-Input Estimator for Disturbance Estimation and Compensation
A Novel Unknown-Input Estimator for Disturbance Estimation an Compensation Difan ang Lei Chen Eric Hu School of Mechanical Engineering he University of Aelaie Aelaie South Australia 5005 Australia leichen@aelaieeuau
More informationPerturbation Analysis and Optimization of Stochastic Flow Networks
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. XX, NO. Y, MMM 2004 1 Perturbation Analysis an Optimization of Stochastic Flow Networks Gang Sun, Christos G. Cassanras, Yorai Wari, Christos G. Panayiotou,
More informationA. Exclusive KL View of the MLE
A. Exclusive KL View of the MLE Lets assume a change-of-variable moel p Z z on the ranom variable Z R m, such as the one use in Dinh et al. 2017: z 0 p 0 z 0 an z = ψz 0, where ψ is an invertible function
More informationA variance decomposition and a Central Limit Theorem for empirical losses associated with resampling designs
Mathias Fuchs, Norbert Krautenbacher A variance ecomposition an a Central Limit Theorem for empirical losses associate with resampling esigns Technical Report Number 173, 2014 Department of Statistics
More informationChapter 2 Lagrangian Modeling
Chapter 2 Lagrangian Moeling The basic laws of physics are use to moel every system whether it is electrical, mechanical, hyraulic, or any other energy omain. In mechanics, Newton s laws of motion provie
More information